Module 10 Application¶

Challenge: Crypto Clustering¶

In this Challenge, you’ll combine your financial Python programming skills with the new unsupervised learning skills that you acquired in this module.

The CSV file provided for this challenge contains price change data of cryptocurrencies in different periods.

The steps for this challenge are broken out into the following sections:

  • Import the Data (provided in the starter code)
  • Prepare the Data (provided in the starter code)
  • Find the Best Value for k Using the Original Data
  • Cluster Cryptocurrencies with K-means Using the Original Data
  • Optimize Clusters with Principal Component Analysis
  • Find the Best Value for k Using the PCA Data
  • Cluster the Cryptocurrencies with K-means Using the PCA Data
  • Visualize and Compare the Results

Import the Data¶

This section imports the data into a new DataFrame. It follows these steps:

  1. Read the “crypto_market_data.csv” file from the Resources folder into a DataFrame, and use index_col="coin_id" to set the cryptocurrency name as the index. Review the DataFrame.

  2. Generate the summary statistics, and use HvPlot to visualize your data to observe what your DataFrame contains.

Rewind: The Pandasdescribe()function generates summary statistics for a DataFrame.

In [1]:
# Import required libraries and dependencies
import pandas as pd
import hvplot.pandas
from pathlib import Path
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
In [2]:
# Load the data into a Pandas DataFrame
df_market_data = pd.read_csv(
    Path("../data/crypto_market_data.csv"),
    index_col="coin_id")

# Display sample data
df_market_data.head(10)
Out[2]:
price_change_percentage_24h price_change_percentage_7d price_change_percentage_14d price_change_percentage_30d price_change_percentage_60d price_change_percentage_200d price_change_percentage_1y
coin_id
bitcoin 1.08388 7.60278 6.57509 7.67258 -3.25185 83.51840 37.51761
ethereum 0.22392 10.38134 4.80849 0.13169 -12.88890 186.77418 101.96023
tether -0.21173 0.04935 0.00640 -0.04237 0.28037 -0.00542 0.01954
ripple -0.37819 -0.60926 2.24984 0.23455 -17.55245 39.53888 -16.60193
bitcoin-cash 2.90585 17.09717 14.75334 15.74903 -13.71793 21.66042 14.49384
binancecoin 2.10423 12.85511 6.80688 0.05865 36.33486 155.61937 69.69195
chainlink -0.23935 20.69459 9.30098 -11.21747 -43.69522 403.22917 325.13186
cardano 0.00322 13.99302 5.55476 10.10553 -22.84776 264.51418 156.09756
litecoin -0.06341 6.60221 7.28931 1.21662 -17.23960 27.49919 -12.66408
bitcoin-cash-sv 0.92530 3.29641 -1.86656 2.88926 -24.87434 7.42562 93.73082
In [3]:
# Generate summary statistics
df_market_data.describe()
Out[3]:
price_change_percentage_24h price_change_percentage_7d price_change_percentage_14d price_change_percentage_30d price_change_percentage_60d price_change_percentage_200d price_change_percentage_1y
count 41.000000 41.000000 41.000000 41.000000 41.000000 41.000000 41.000000
mean -0.269686 4.497147 0.185787 1.545693 -0.094119 236.537432 347.667956
std 2.694793 6.375218 8.376939 26.344218 47.365803 435.225304 1247.842884
min -13.527860 -6.094560 -18.158900 -34.705480 -44.822480 -0.392100 -17.567530
25% -0.608970 0.047260 -5.026620 -10.438470 -25.907990 21.660420 0.406170
50% -0.063410 3.296410 0.109740 -0.042370 -7.544550 83.905200 69.691950
75% 0.612090 7.602780 5.510740 4.578130 0.657260 216.177610 168.372510
max 4.840330 20.694590 24.239190 140.795700 223.064370 2227.927820 7852.089700
In [4]:
# Plot your data to see what's in your DataFrame
df_market_data.hvplot.line(
    width=800,
    height=400,
    rot=90,
    grid=True
)
Out[4]:

Prepare the Data¶

This section prepares the data before running the K-Means algorithm. It follows these steps:

  1. Use the StandardScaler module from scikit-learn to normalize the CSV file data. This will require you to utilize the fit_transform function.

  2. Create a DataFrame that contains the scaled data. Be sure to set the coin_id index from the original DataFrame as the index for the new DataFrame. Review the resulting DataFrame.

In [5]:
# Use the `StandardScaler()` module from scikit-learn to normalize the data from the CSV file
scaled_data = StandardScaler().fit_transform(df_market_data)
In [6]:
# Create a DataFrame with the scaled data
df_market_data_scaled = pd.DataFrame(
    scaled_data,
    columns=df_market_data.columns
)

# Copy the crypto names from the original data
df_market_data_scaled["coin_id"] = df_market_data.index

# Set the coinid column as index
df_market_data_scaled = df_market_data_scaled.set_index("coin_id")

# Display sample data
df_market_data_scaled.head()
Out[6]:
price_change_percentage_24h price_change_percentage_7d price_change_percentage_14d price_change_percentage_30d price_change_percentage_60d price_change_percentage_200d price_change_percentage_1y
coin_id
bitcoin 0.508529 0.493193 0.772200 0.235460 -0.067495 -0.355953 -0.251637
ethereum 0.185446 0.934445 0.558692 -0.054341 -0.273483 -0.115759 -0.199352
tether 0.021774 -0.706337 -0.021680 -0.061030 0.008005 -0.550247 -0.282061
ripple -0.040764 -0.810928 0.249458 -0.050388 -0.373164 -0.458259 -0.295546
bitcoin-cash 1.193036 2.000959 1.760610 0.545842 -0.291203 -0.499848 -0.270317

Find the Best Value for k Using the Original Data¶

In this section, you will use the elbow method to find the best value for k.

  1. Code the elbow method algorithm to find the best value for k. Use a range from 1 to 11.

  2. Plot a line chart with all the inertia values computed with the different values of k to visually identify the optimal value for k.

  3. Answer the following question: What is the best value for k?

In [7]:
# Create a list with the number of k-values to try
# Use a range from 1 to 11
k = list(range(1, 11))
In [8]:
# Create an empy list to store the inertia values
inertia = []
In [9]:
# Create a for loop to compute the inertia with each possible value of k
# Inside the loop:
for i in k:
    k_model = KMeans(n_clusters=i, random_state=0)     # 1. Create a KMeans model using the loop counter for the n_clusters
    k_model.fit(df_market_data)                        # 2. Fit the model to the data using `df_market_data_scaled`
    inertia.append(k_model.inertia_)                   # 3. Append the model.inertia_ to the inertia list
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
In [10]:
# Create a dictionary with the data to plot the Elbow curve
elbow_data = {"k": k, "inertia": inertia}

# Create a DataFrame with the data to plot the Elbow curve
df_elbow = pd.DataFrame(elbow_data)

# Review the DataFrame
df_elbow.head()
Out[10]:
k inertia
0 1 6.998354e+07
1 2 8.193204e+06
2 3 2.592707e+06
3 4 8.352274e+05
4 5 4.373295e+05
In [11]:
# Plot a line chart with all the inertia values computed with 
# the different values of k to visually identify the optimal value for k.
market_data_elbow_plot = df_elbow.hvplot.line(
    x="k", 
    y="inertia", 
    title="Elbow Curve: Market Data", 
    xticks=k,
    grid=True
).opts(yformatter="%0f")

display(market_data_elbow_plot)

Answer the following question: What is the best value for k?¶

Question: What is the best value for k?

Answer:

* k=2 provides the most value or information
* k=3 will be used for slightly more information 
* k=4 will not be used because information gain is too small 

Cluster Cryptocurrencies with K-means Using the Original Data¶

In this section, you will use the K-Means algorithm with the best value for k found in the previous section to cluster the cryptocurrencies according to the price changes of cryptocurrencies provided.

  1. Initialize the K-Means model with four clusters using the best value for k.

  2. Fit the K-Means model using the original data.

  3. Predict the clusters to group the cryptocurrencies using the original data. View the resulting array of cluster values.

  4. Create a copy of the original data and add a new column with the predicted clusters.

  5. Create a scatter plot using hvPlot by setting x="price_change_percentage_24h" and y="price_change_percentage_7d". Color the graph points with the labels found using K-Means and add the crypto name in the hover_cols parameter to identify the cryptocurrency represented by each data point.

In [12]:
# Initialize the K-Means model using the best value for k
model = KMeans(n_clusters=3)
In [13]:
# Fit the K-Means model using the scaled data
model.fit(df_market_data_scaled)
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
Out[13]:
KMeans(n_clusters=3)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
KMeans(n_clusters=3)
In [14]:
# Predict the clusters to group the cryptocurrencies using the scaled data
market_clusters = model.predict(df_market_data_scaled)

# View the resulting array of cluster values.
print(market_clusters)
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 2
 0 0 0 0]
In [15]:
# Create a copy of the DataFrame
df_market_scaled_predictions = df_market_data_scaled.copy()

# Review the DataFrame
df_market_scaled_predictions.head()
Out[15]:
price_change_percentage_24h price_change_percentage_7d price_change_percentage_14d price_change_percentage_30d price_change_percentage_60d price_change_percentage_200d price_change_percentage_1y
coin_id
bitcoin 0.508529 0.493193 0.772200 0.235460 -0.067495 -0.355953 -0.251637
ethereum 0.185446 0.934445 0.558692 -0.054341 -0.273483 -0.115759 -0.199352
tether 0.021774 -0.706337 -0.021680 -0.061030 0.008005 -0.550247 -0.282061
ripple -0.040764 -0.810928 0.249458 -0.050388 -0.373164 -0.458259 -0.295546
bitcoin-cash 1.193036 2.000959 1.760610 0.545842 -0.291203 -0.499848 -0.270317
In [16]:
# Add a new column to the DataFrame with the predicted clusters
df_market_scaled_predictions["MarketCluster"] = market_clusters

# Display sample data
# df.loc[~df['column_name'].isin(some_values)] -- df.loc[df['B'].isin(['one','three'])]
print('\nCoins in Segments 1 and 2')
print('=========================')
display(df_market_scaled_predictions.loc[df_market_scaled_predictions["MarketCluster"].isin([1,2])])
print('\nCoins in Segment 0')
print('==================')
display(df_market_scaled_predictions.head())
Coins in Segments 1 and 2
=========================
price_change_percentage_24h price_change_percentage_7d price_change_percentage_14d price_change_percentage_30d price_change_percentage_60d price_change_percentage_200d price_change_percentage_1y MarketCluster
coin_id
ethlend -4.981042 -0.045178 -1.206956 -1.212126 0.047736 4.632380 6.088625 1
celsius-degree-token 1.045530 -0.618328 2.907054 5.351455 4.769913 3.148875 1.348488 2
Coins in Segment 0
==================
price_change_percentage_24h price_change_percentage_7d price_change_percentage_14d price_change_percentage_30d price_change_percentage_60d price_change_percentage_200d price_change_percentage_1y MarketCluster
coin_id
bitcoin 0.508529 0.493193 0.772200 0.235460 -0.067495 -0.355953 -0.251637 0
ethereum 0.185446 0.934445 0.558692 -0.054341 -0.273483 -0.115759 -0.199352 0
tether 0.021774 -0.706337 -0.021680 -0.061030 0.008005 -0.550247 -0.282061 0
ripple -0.040764 -0.810928 0.249458 -0.050388 -0.373164 -0.458259 -0.295546 0
bitcoin-cash 1.193036 2.000959 1.760610 0.545842 -0.291203 -0.499848 -0.270317 0
In [17]:
# Create a scatter plot using hvPlot by setting 
# `x="price_change_percentage_24h"` and `y="price_change_percentage_7d"`. 
# Color the graph points with the labels found using K-Means and 
# add the crypto name in the `hover_cols` parameter to identify 
# the cryptocurrency represented by each data point.
market_scaled_predictions_plot = df_market_scaled_predictions.hvplot.scatter(
    x="price_change_percentage_24h",
    y="price_change_percentage_7d",
    by="MarketCluster",
    title="Scatter Plot: Scaled Market Data", 
    grid=True,
    hover_cols=['coin_id']
)

display(market_scaled_predictions_plot)

Optimize Clusters with Principal Component Analysis¶

In this section, you will perform a principal component analysis (PCA) and reduce the features to three principal components.

  1. Create a PCA model instance and set n_components=3.

  2. Use the PCA model to reduce to three principal components. View the first five rows of the DataFrame.

  3. Retrieve the explained variance to determine how much information can be attributed to each principal component.

  4. Answer the following question: What is the total explained variance of the three principal components?

  5. Create a new DataFrame with the PCA data. Be sure to set the coin_id index from the original DataFrame as the index for the new DataFrame. Review the resulting DataFrame.

In [18]:
# Create a PCA model instance and set `n_components=3`.
pca = PCA(n_components=3)
In [19]:
# Use the PCA model with `fit_transform` to reduce to 
# three principal components.
market_pca = pca.fit_transform(df_market_data)

# View the first five rows of the DataFrame. 
market_pca[:5]
Out[19]:
array([[-341.80096268,  -51.36677548,   12.52547089],
       [-249.42046633,   24.11754777,  -14.23146597],
       [-402.61472077, -118.71073742,   24.83839662],
       [-406.75243715,  -79.48728629,    1.56633057],
       [-382.42994789, -103.43195906,   16.75307273]])
In [20]:
# Retrieve the explained variance to determine how much information 
# can be attributed to each principal component.
pca.explained_variance_ratio_
print([ "{:0.5f}".format(x) for x in pca.explained_variance_ratio_ ])
['0.97604', '0.02303', '0.00075']

Answer the following question: What is the total explained variance of the three principal components?¶

Question: What is the total explained variance of the three principal components?

In [21]:
# Let the code answer the question
total = 0

for x in pca.explained_variance_ratio_: 
    total += x
    print(f'Variance: {x * 100: 0.3f}%, Total Variance: {total * 100 :0.2f}%')
    
Variance:  97.604%, Total Variance: 97.60%
Variance:  2.303%, Total Variance: 99.91%
Variance:  0.075%, Total Variance: 99.98%

Answer: Nearly 100% of the variance is explained by these components

In [22]:
# Create a new DataFrame with the PCA data.
# Note: The code for this step is provided for you

# Creating a DataFrame with the PCA data
market_pca_df = pd.DataFrame(
    market_pca,
    columns=["PCA1", "PCA2", "PCA3"]
)

display(market_pca_df.shape)

# Copy the crypto names from the original data
market_pca_df['coin_id'] = df_market_data.index.values
display(market_pca_df.shape)

# Set the coinid column as index
market_pca_df.set_index('coin_id', inplace=True)

# Display sample data
display(market_pca_df)
(41, 3)
(41, 4)
PCA1 PCA2 PCA3
coin_id
bitcoin -341.800963 -51.366775 12.525471
ethereum -249.420466 24.117548 -14.231466
tether -402.614721 -118.710737 24.838397
ripple -406.752437 -79.487286 1.566331
bitcoin-cash -382.429948 -103.431959 16.753073
binancecoin -289.125020 12.287170 34.163848
chainlink 28.151408 154.987995 -73.126506
cardano -174.519832 80.243493 -30.392830
litecoin -406.613342 -91.783029 5.016144
bitcoin-cash-sv -311.219887 -143.285351 6.083080
crypto-com-chain -43.747802 -0.880040 -23.088284
usd-coin -402.864243 -118.838225 24.596322
eos -414.237977 -101.768963 -8.053274
monero -216.818545 8.493823 55.978302
tron -306.196962 -11.952941 4.573263
tezos -256.458443 -127.902916 -21.983027
okb -255.261261 -124.797933 17.121792
stellar -364.332796 -49.102272 -20.410861
cosmos -268.219369 30.466301 -8.224769
cdai -403.023491 -119.107827 24.449027
neo -229.661892 -8.803589 5.686160
wrapped-bitcoin -341.672172 -51.088157 12.046052
leo-token -377.733744 -110.558329 16.228922
huobi-token -366.548441 -87.255215 14.133475
nem -145.551343 37.211474 69.779081
binance-usd -402.470760 -118.671238 24.689086
iota -378.208662 -34.321847 -25.265565
vechain -128.539445 65.675378 -59.716008
zcash -286.959559 -31.013635 -7.990442
theta-token 532.124138 516.120826 -13.315019
dash -403.345732 -117.010568 -3.148999
ethereum-classic -389.140428 -116.155069 0.891223
ethlend 7755.778366 -361.972320 -11.117604
maker -367.838480 -46.016102 -0.000045
havven 431.490135 255.732568 -97.419812
omisego 79.668292 346.529605 3.861393
celsius-degree-token 1993.820798 823.817226 97.948664
ontology -398.571139 -71.939902 -29.341220
ftx-token -224.737127 -114.795270 23.570526
true-usd -402.271474 -118.900574 24.885715
digibyte -82.125235 275.234662 -74.559616

Find the Best Value for k Using the PCA Data¶

In this section, you will use the elbow method to find the best value for k using the PCA data.

  1. Code the elbow method algorithm and use the PCA data to find the best value for k. Use a range from 1 to 11.

  2. Plot a line chart with all the inertia values computed with the different values of k to visually identify the optimal value for k.

  3. Answer the following questions: What is the best value for k when using the PCA data? Does it differ from the best k value found using the original data?

In [23]:
# Create a list with the number of k-values to try
# Use a range from 1 to 11
k = list(range(1, 11))
In [24]:
# Create an empy list to store the inertia values
inertia = []
In [25]:
# Create a for loop to compute the inertia with each possible value of k
# Inside the loop:

for i in k:
    k_model = KMeans(n_clusters=i, random_state=1)     # 1. Create a KMeans model using the loop counter for the n_clusters
    k_model.fit(market_pca_df)                         # 2. Fit the model to the data using `df_market_data_pca`
    inertia.append(k_model.inertia_)                   # 3. Append the model.inertia_ to the inertia list
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
In [26]:
# Create a dictionary with the data to plot the Elbow curve
elbow_data = {"k": k, "inertia": inertia}

# Create a DataFrame with the data to plot the Elbow curve
df_elbow = pd.DataFrame(elbow_data)

# Review the DataFrame
df_elbow.head()
Out[26]:
k inertia
0 1 6.997052e+07
1 2 8.180192e+06
2 3 2.580721e+06
3 4 8.237471e+05
4 5 4.264175e+05
In [27]:
# Plot a line chart with all the inertia values computed with 
# the different values of k to visually identify the optimal value for k.
pca_elbow_plot = df_elbow.hvplot.line(
    x="k", 
    y="inertia", 
    title="Elbow Curve: PCA Data", 
    xticks=k,
    grid=True
).opts(yformatter="%0f")

display(pca_elbow_plot)

Answer the following questions: What is the best value for k when using the PCA data? Does it differ from the best k value found using the original data?¶

  • Question: What is the best value for k when using the PCA data?
    • Answer:
      • k=2 provides the most value or information
      • k=3 will be used for slightly more information
      • k=4 will not be used because information gain is too small
  • Question: Does it differ from the best k value found using the original data?
    • Answer:
      • The anser is the same as for the origional data

Cluster Cryptocurrencies with K-means Using the PCA Data¶

In this section, you will use the PCA data and the K-Means algorithm with the best value for k found in the previous section to cluster the cryptocurrencies according to the principal components.

  1. Initialize the K-Means model with four clusters using the best value for k.

  2. Fit the K-Means model using the PCA data.

  3. Predict the clusters to group the cryptocurrencies using the PCA data. View the resulting array of cluster values.

  4. Add a new column to the DataFrame with the PCA data to store the predicted clusters.

  5. Create a scatter plot using hvPlot by setting x="PC1" and y="PC2". Color the graph points with the labels found using K-Means and add the crypto name in the hover_cols parameter to identify the cryptocurrency represented by each data point.

In [28]:
# Initialize the K-Means model using the best value for k (k = 3)
model = KMeans(n_clusters=3, random_state=0)
In [29]:
# Fit the K-Means model using the PCA data
model.fit(market_pca_df)
C:\Users\jersk\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1332: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
Out[29]:
KMeans(n_clusters=3, random_state=0)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
KMeans(n_clusters=3, random_state=0)
In [30]:
# Predict the clusters to group the cryptocurrencies using the PCA data
k_3 = model.predict(market_pca_df)

# View the resulting array of cluster values.
display(k_3)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0])
In [31]:
# Create a copy of the DataFrame with the PCA data
model_pca_predictions_df = market_pca_df.copy()

# Add a new column to the DataFrame with the predicted clusters
model_pca_predictions_df["market_segments"] = k_3

# Display sample data
# df.loc[~df['column_name'].isin(some_values)] -- df.loc[df['B'].isin(['one','three'])]
print('\nCoins in Segments 1 and 2')
print('=========================')
display(model_pca_predictions_df.loc[df_market_scaled_predictions["MarketCluster"].isin([1,2])])

print('\nCoins in Segment 0')
print('==================')
display(model_pca_predictions_df.head())
Coins in Segments 1 and 2
=========================
PCA1 PCA2 PCA3 market_segments
coin_id
ethlend 7755.778366 -361.972320 -11.117604 1
celsius-degree-token 1993.820798 823.817226 97.948664 2
Coins in Segment 0
==================
PCA1 PCA2 PCA3 market_segments
coin_id
bitcoin -341.800963 -51.366775 12.525471 0
ethereum -249.420466 24.117548 -14.231466 0
tether -402.614721 -118.710737 24.838397 0
ripple -406.752437 -79.487286 1.566331 0
bitcoin-cash -382.429948 -103.431959 16.753073 0
In [32]:
# Create a scatter plot using hvPlot by setting 
# `x="PC1"` and `y="PC2"`. 
# Color the graph points with the labels found using K-Means and 
# add the crypto name in the `hover_cols` parameter to identify 
# the cryptocurrency represented by each data point.
model_pca_predictions_plot = model_pca_predictions_df.hvplot.scatter(
    x="PCA1",
    y="PCA2",
    by="market_segments",
    title="Scatter Plot: PCA Predictions Data", 
    grid=True,
    hover_cols=['coin_id']
)
display(model_pca_predictions_plot)

Visualize and Compare the Results¶

In this section, you will visually analyze the cluster analysis results by contrasting the outcome with and without using the optimization techniques.

  1. Create a composite plot using hvPlot and the plus (+) operator to contrast the Elbow Curve that you created to find the best value for k with the original and the PCA data.

  2. Create a composite plot using hvPlot and the plus (+) operator to contrast the cryptocurrencies clusters using the original and the PCA data.

  3. Answer the following question: After visually analyzing the cluster analysis results, what is the impact of using fewer features to cluster the data using K-Means?

Rewind: Back in Lesson 3 of Module 6, you learned how to create composite plots. You can look at that lesson to review how to make these plots; also, you can check the hvPlot documentation.

In [33]:
# Composite plot to contrast the Elbow curves
market_data_elbow_plot + pca_elbow_plot
Out[33]:
In [34]:
# Compoosite plot to contrast the clusters
market_scaled_predictions_plot + model_pca_predictions_plot
Out[34]:

Answer the following question: After visually analyzing the cluster analysis results, what is the impact of using fewer features to cluster the data using K-Means?¶

  • Question: After visually analyzing the cluster analysis results, what is the impact of using fewer features to cluster the data using K-Means?

  • Answer:

    • There is no differnece in the Elbow Curves --> We make the same decision and choose the same k.
    • The PCA scatter plot makes it clearer why ethlend and celcious-degree-tolken in particular are in their own clusters.
In [ ]: